A Column Generation Algorithm for Boosting
Abstract
We examine linear programming (LP) approaches to boosting and demonstrate their efficient solution using LPBoost, a column generation simplex method. We prove that minimizing the soft margin error function (equivalent to solving an LP) directly optimizes a generalization error bound. LPBoost can solve any boosting LP by iteratively optimizing the dual classification costs in a restricted LP and dynamically generating weak learners to form new LP columns. Unlike gradient boosting algorithms, LPBoost converges finitely to a global solution using well-defined stopping criteria. Computationally, LPBoost finds very sparse solutions as good as or better than those found by AdaBoost, using comparable computation.
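The column generation loop described above can be made concrete with a small sketch. The following Python code is illustrative only, assuming decision stumps as the weak-learner class and SciPy's linprog (HiGHS) as the restricted-LP solver; the names (lpboost, best_stump, the cost bound D) are ours, not the paper's.

```python
# A minimal, illustrative LPBoost-style column generation loop.
# Assumptions (ours, not the paper's): decision stumps as weak learners,
# SciPy's linprog (HiGHS) as the restricted-LP solver.
import numpy as np
from scipy.optimize import linprog

def stump_predictions(X, feature, thresh, sign):
    """Decision stump h(x) in {-1, +1}."""
    return sign * np.where(X[:, feature] <= thresh, 1.0, -1.0)

def best_stump(X, y, u):
    """Weak-learner oracle: maximize the dual edge sum_i u_i y_i h(x_i)."""
    best_edge, best_params = -np.inf, None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for s in (1.0, -1.0):
                h = stump_predictions(X, f, t, s)
                edge = np.dot(u * y, h)
                if edge > best_edge:
                    best_edge, best_params = edge, (f, t, s)
    return best_edge, best_params

def lpboost(X, y, D=0.1, eps=1e-6, max_iter=50):
    """y in {-1, +1}; D is the soft-margin cost bound (needs D >= 1/len(y))."""
    m = len(y)
    u = np.full(m, 1.0 / m)          # dual misclassification costs
    beta = 0.0
    H, stumps = [], []               # columns y_i * h_j(x_i) and their stumps
    for _ in range(max_iter):
        edge, params = best_stump(X, y, u)
        if edge <= beta + eps:       # no column prices out: optimum reached
            break
        stumps.append(params)
        H.append(y * stump_predictions(X, *params))
        # Restricted dual LP over z = (u_1..u_m, beta):
        #   min beta  s.t.  sum_i u_i y_i h_j(x_i) <= beta for all j,
        #   sum_i u_i = 1,  0 <= u_i <= D.
        c = np.zeros(m + 1); c[-1] = 1.0
        A_ub = np.array([np.append(h, -1.0) for h in H])
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(H)),
                      A_eq=np.append(np.ones(m), 0.0).reshape(1, -1),
                      b_eq=[1.0], bounds=[(0.0, D)] * m + [(None, None)],
                      method="highs")
        u, beta = res.x[:m], res.x[-1]
    if not stumps:                   # no stump beat random guessing
        return [], np.array([])
    # Ensemble weights a_j are the duals of the edge constraints.
    a = np.abs(res.ineqlin.marginals)   # abs: sign convention varies by solver
    return stumps, a / a.sum()
```

The stopping test `edge <= beta + eps` is the column-generation pricing condition: once no weak learner's dual edge exceeds the current bound beta, the restricted LP solution is optimal for the full LP, which is the finite, well-defined stopping criterion the abstract refers to.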
Similar Papers
Fast Training of Effective Multi-class Boosting Using Coordinate Descent Optimization
We present a novel column-generation-based boosting method for multi-class classification. Our multi-class boosting is formulated as a single optimization problem, as in [1, 2]. Unlike most existing multi-class boosting methods, which use the same set of weak learners for all the classes, we train class-specified weak learners (i.e., each class has a different set of weak learners). We s...
A Genetic Algorithm for Choice-Based Network Revenue Management
In recent years, enriching traditional revenue management models by considering customer choice behavior has been a main challenge for researchers. The terminology of the airline application is used as representative of the problem. A popular and efficient model considering this behavior is choice-based deterministic linear programming (CDLP). This model assumes that each customer bel...
Margin Distribution Controlled Boosting
Schapire's margin theory provides a theoretical explanation for the success of boosting-type methods and shows that a good margin distribution (MD) of training samples is essential for generalization. However, the statement that an MD is good is vague; consequently, many recently developed algorithms try to generate an MD that is good in their own sense in order to boost generalization. Unlike their indire...
Direct 0-1 Loss Minimization and Margin Maximization with Boosting
We propose a boosting method, DirectBoost, a greedy coordinate descent algorithm that builds an ensemble classifier of weak classifiers by directly minimizing the empirical classification error over labeled training examples; once the training classification error is reduced to a local coordinatewise minimum, DirectBoost runs a greedy coordinate ascent algorithm that continuously adds weak cla...
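A rough sketch of the first, coordinate-descent phase described in this snippet, assuming a fixed pool of weak-classifier outputs and a fixed step size (both our simplifications, not the paper's exact procedure):

```python
# A rough sketch of greedy coordinate descent on the empirical 0-1 loss,
# in the spirit of DirectBoost's first phase. The fixed pool of weak
# classifiers and the fixed step size are our simplifications.
import numpy as np

def zero_one_error(F, y):
    """Empirical classification error of ensemble scores F against labels y."""
    return np.mean(np.sign(F) != y)

def coordinate_descent_01(H, y, step=0.1, max_iter=200):
    """H: (n_samples, n_weak) matrix of weak-classifier outputs in {-1, +1}."""
    n, k = H.shape
    alpha = np.zeros(k)              # ensemble weights (the coordinates)
    F = np.zeros(n)                  # current ensemble scores
    err = zero_one_error(F, y)
    for _ in range(max_iter):
        best_err, best_j = err, None
        for j in range(k):           # try one step along each coordinate
            e = zero_one_error(F + step * H[:, j], y)
            if e < best_err:
                best_err, best_j = e, j
        if best_j is None:           # coordinatewise local minimum reached
            break
        alpha[best_j] += step
        F += step * H[:, best_j]
        err = best_err
    return alpha, err
```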
LACBoost and FisherBoost: Optimally Building Cascade Classifiers
Object detection is one of the key tasks in computer vision. The cascade framework of Viola and Jones has become the de facto standard. A classifier in each node of the cascade is required to achieve an extremely high detection rate, rather than a low overall classification error. Although there are a few reported methods addressing this requirement in the context of object detection, there is no a ...